Add support for QNX Neutrino to standard library #106673
Conversation
r? @m-ou-se (rustbot has picked a reviewer for you, use r? to override)
Hey! It looks like you've submitted a new PR for the library teams! If this PR contains changes to any Examples of …
Force-pushed from 8b3acd2 to 4ca0e8f
Force-pushed from 4ca0e8f to 1d445c1
Force-pushed from a8a2332 to b1191fd
```rust
// based on stable time.
let now = Instant::now();
…
let timeout = SystemTime::now()
```
This timeout uses the wrong clock: the condvar is configured above to use CLOCK_MONOTONIC (which `Instant::now` uses) to avoid issues when the system time is changed.
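For context, a minimal illustration (not code from this PR) of why the monotonic clock matters for timeouts: `Instant` never goes backwards, so a deadline computed from it is immune to wall-clock adjustments, while a `SystemTime`-based deadline would shift whenever the system time is changed.

```rust
use std::time::{Duration, Instant, SystemTime};

fn main() {
    // Instant is monotonic: a deadline derived from it cannot be
    // perturbed by someone adjusting the system clock.
    let start = Instant::now();
    let deadline = start + Duration::from_millis(10);
    while Instant::now() < deadline {
        std::hint::spin_loop();
    }
    assert!(start.elapsed() >= Duration::from_millis(10));

    // SystemTime, by contrast, reads the wall clock, which may jump;
    // that is why it must not feed the condvar timeout here.
    println!("wall clock reading: {:?}", SystemTime::now());
}
```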
I will discuss possible solutions with @gh-tr and fix the new merge conflicts tomorrow.
Should now be fixed as well, thanks for spotting this bug. In pthread.rs the correct time source was already being used; now pthread_condvar.rs uses the same implementation (almost the same as e.g. Linux, which is why I've merged the two wait_timeout functions). I also compared it with the QNX documentation and example. @gh-tr please also have another look just to be sure. Thanks!
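A sketch of the deadline arithmetic such a merged `wait_timeout` has to get right (helper name and shape are hypothetical, not the PR's actual code): the relative timeout is added to a (seconds, nanoseconds) reading of the monotonic clock, with nanosecond overflow carried into seconds, because `pthread_cond_timedwait` expects an absolute `timespec` on the configured clock.

```rust
use std::time::Duration;

const NSEC_PER_SEC: u64 = 1_000_000_000;

/// Hypothetical helper: convert a relative timeout into an absolute
/// (seconds, nanoseconds) deadline on the same clock, carrying
/// nanosecond overflow into the seconds field.
fn abs_deadline(now: (u64, u32), timeout: Duration) -> (u64, u32) {
    let mut secs = now.0.saturating_add(timeout.as_secs());
    let mut nsecs = u64::from(now.1) + u64::from(timeout.subsec_nanos());
    if nsecs >= NSEC_PER_SEC {
        nsecs -= NSEC_PER_SEC;
        secs = secs.saturating_add(1);
    }
    (secs, nsecs as u32)
}

fn main() {
    // 0.9 s + 0.2 s must carry into the seconds field.
    assert_eq!(
        abs_deadline((1, 900_000_000), Duration::from_millis(200)),
        (2, 100_000_000)
    );
    println!("deadline arithmetic ok");
}
```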
Would it be possible to elaborate on what you're suggesting here? I don't see any references to pthread_sleepon_* being used when searching the nightly builds, and it's unclear to me how this can be used to avoid lazy allocation. I haven't looked in the kernel to see whether it's safe to move mutexes around in memory when not in a locked state, but I don't think this is what you're suggesting.

As a side note, in Neutrino, mutex locking/unlocking is predominantly handled in user space and only hits the kernel under contention. If there is room for optimisation, perhaps it makes sense to push it as a separate pull request?
Force-pushed from ca6e61a to 4843dee
On Linux and some other platforms, the locks in std are built on a futex-like API. Something similar could be emulated on QNX with the `pthread_sleepon_*` functions:

```rust
fn futex_wait(addr: &AtomicU32, val: u32) {
    pthread_sleepon_lock();
    if addr.load(Relaxed) == val {
        pthread_sleepon_wait(addr.as_mut_ptr().cast());
    }
    pthread_sleepon_unlock();
}

fn futex_wake(addr: &AtomicU32) {
    pthread_sleepon_lock();
    pthread_sleepon_signal(addr.as_mut_ptr().cast());
    pthread_sleepon_unlock();
}
```

Of course, priority inheritance is not provided by these implementations, so …
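To show where those two functions would slot in, here is a minimal futex-style mutex in the shape std uses on futex platforms. This is a self-contained sketch, not the PR's code: the blocking calls are stubbed out as spins (a futex wait is always allowed to return spuriously), whereas the `pthread_sleepon_*` version above would actually block.

```rust
use std::sync::atomic::{
    AtomicU32,
    Ordering::{Acquire, Relaxed, Release},
};

// Stand-ins for the real blocking primitives so the sketch runs alone.
fn futex_wait(_addr: &AtomicU32, _val: u32) {
    std::hint::spin_loop(); // real impl: block until woken
}
fn futex_wake(_addr: &AtomicU32) {} // real impl: wake one waiter

/// Minimal futex-style mutex: 0 = unlocked, 1 = locked, 2 = contended.
pub struct RawMutex {
    state: AtomicU32,
}

impl RawMutex {
    pub const fn new() -> Self {
        Self { state: AtomicU32::new(0) }
    }

    pub fn lock(&self) {
        // Uncontended fast path: no kernel involvement at all.
        if self.state.compare_exchange(0, 1, Acquire, Relaxed).is_ok() {
            return;
        }
        // Contended: mark the lock contended and wait until it is free.
        while self.state.swap(2, Acquire) != 0 {
            futex_wait(&self.state, 2);
        }
    }

    pub fn unlock(&self) {
        // Only issue a wake-up if someone may actually be waiting.
        if self.state.swap(0, Release) == 2 {
            futex_wake(&self.state);
        }
    }
}

fn main() {
    let m = RawMutex::new();
    m.lock();
    m.unlock();
    println!("lock/unlock ok");
}
```

The uncontended fast path mirrors the point made above about Neutrino: lock and unlock stay in user space, and only the contended paths need the kernel.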
Force-pushed from 125478e to e5b3d1b
Thanks for the good explanation, @joboet! As far as I understand them, the pthread_sleepon_* functions use one global lock (one per process?). As long as the Rust stdlib is the only user of these functions it should be fine, but there could also be usage outside Rust code (C/C++ code in the same process).

At the moment, the pthread_sleepon_* functions are not available in the libc crate (and they are implemented as simple macros). It should not be a big deal to introduce them, but it will take some time. Considering also the risk of a priority inversion (at least I don't have a good enough overview of the std yet to evaluate this), I would prefer to delay this optimization and discuss with QNX Neutrino kernel experts what the best solution would be. We had this topic on our wish list anyway...

Would this be okay for you, @joboet?
Force-pushed from e5b3d1b to fb025ee
fb025ee includes a minor change in qnx-nto.md and is rebased onto the current master.
This comment was marked as off-topic.
r? @thomcc
There was a bunch of out-of-band chatter, and the upshot of it is that the tests don't quite pass because the Neutrino OS on the CPUs in question can only handle limited concurrency, even with a retry loop, unless the retry loop is extended to a very large number of cycles.

This is unlikely to affect actual user code that does not strain the resources of the target devices Neutrino runs on: Neutrino is an RTOS, and RTOSes are often intended for relatively low-power devices. So I think that's completely fair. A minimal retry loop seems good since it makes most of the tests pass, but for the last few holdouts it seems better to just skip the tests for now than to add another 100 retries; that risks introducing hard-to-debug long delays in code on this OS because we tried to "fix" things. And this is a tier 3 target.
With that, I have only one last nit.
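The kind of bounded retry being weighed above can be sketched like this (a hypothetical helper, not the code that landed in the tests): retry `thread::Builder::spawn` a handful of times when the OS transiently refuses to create a thread, rather than failing instantly or retrying hundreds of times.

```rust
use std::io;
use std::thread;

/// Hypothetical helper: retry thread creation a bounded number of times.
/// On a resource-constrained RTOS target, spawning may transiently fail;
/// a short bounded retry avoids both instant failure and the long,
/// hard-to-debug delays a very large retry count could introduce.
fn spawn_with_retry<F>(retries: usize, f: F) -> io::Result<thread::JoinHandle<()>>
where
    F: Fn() + Clone + Send + 'static,
{
    let mut last_err = None;
    for _ in 0..=retries {
        match thread::Builder::new().spawn(f.clone()) {
            Ok(handle) => return Ok(handle),
            Err(e) => {
                last_err = Some(e);
                // Give the system a chance to free resources before retrying.
                thread::yield_now();
            }
        }
    }
    Err(last_err.unwrap())
}

fn main() {
    let handle = spawn_with_retry(3, || {}).expect("spawn failed after retries");
    handle.join().unwrap();
    println!("spawned with bounded retry");
}
```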
Co-authored-by: Jubilee <46493976+workingjubilee@users.noreply.github.com>
@bors r+
⌛ Testing commit a510715 with merge 955007b771439aabf76ac82f58736241b2cf8707...
💔 Test failed - checks-actions
Spurious LLVM failure?
☀️ Test successful - checks-actions
Finished benchmarking commit (864b625): comparison URL.

Overall result: no relevant changes - no action needed

@rustbot label: -perf-regression

Instruction count: This benchmark run did not return any relevant results for this metric.

Max RSS (memory usage): This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Cycles: This benchmark run did not return any relevant results for this metric.
…jyn514 Replace `yes` command by `while-echo` in test `tests/ui/process/process-sigpipe.rs` The `yes` command is not available on all platforms. Fixes rust-lang#108596. Inviting `@mvf` as he contributed to this patch. Thanks! This issue has been discussed in rust-lang#106673 but was moved to rust-lang#108596 to get going. CC `@gh-tr` r? `@workingjubilee` `@rustbot` label +O-neutrino Notes about the comments rust-lang#106673 (comment): - The `echo` command is `/proc/boot/echo` (not built-in) - `/bin/sh` is a symlink to `/proc/boot/ksh` ```sh # ls -l /bin/sh /proc/boot/ksh /proc/boot/echo lrwxrwxrwx 1 root root 14 Mar 20 07:52 /bin/sh -> /proc/boot/ksh -r-xr-xr-x 1 root root 9390 Sep 12 2022 /proc/boot/echo -r-xr-xr-x 1 root root 308114 Sep 12 2022 /proc/boot/ksh ```
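The idea behind the swap can be illustrated from Rust (a sketch of the principle, not the actual test file): both variants rely on the producer dying from SIGPIPE once `head` closes the pipe, but the `while echo` loop only needs `sh` and `echo`, which Neutrino provides in `/proc/boot`. Note that `std::process::Command` restores the default SIGPIPE disposition in spawned children, so the shell loop terminates as expected.

```rust
use std::process::Command;

fn main() {
    // `yes | head` is the classic form, but `yes` is not available on
    // all platforms. A plain shell loop behaves the same way: once
    // `head` exits, the next `echo` write raises SIGPIPE and the loop dies.
    let out = Command::new("sh")
        .arg("-c")
        .arg("while echo y ; do : ; done | head -n 2")
        .output()
        .expect("failed to run sh");
    assert_eq!(out.stdout, b"y\ny\n");
    println!("SIGPIPE terminated the echo loop as expected");
}
```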
This change bumps `libc` to version `0.2.139`, which supports QNX Neutrino. CC @gh-tr

Tested mainly with an x86_64 virtual machine (see qnx-nto.md) and partially on aarch64 hardware (some tests fail due to constrained resources).